add additional columns to constraints #108
Conversation
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
Codecov Report
@@            Coverage Diff             @@
##           master     #108      +/-   ##
==========================================
+ Coverage   43.55%   43.86%   +0.31%
==========================================
  Files          27       27
  Lines        2404     2414      +10
==========================================
+ Hits         1047     1059      +12
+ Misses      1037     1036       -1
+ Partials     320      319       -1
This is awesome!
It occurs to me that we need to be intentional about how we roll this out to GK.
If we simply vendor this code in and push, the old GK Pods and the new ones will fight over the CRD for as long as both sets of Pods exist simultaneously.
This is probably relatively harmless, since that window should be brief (the time it takes a Deployment to roll out the change), and I think the fights are non-destructive beyond adding QPS to the API server, but this should be tested.
As an alternative, we can push out a GK update that ignores diffs caused by AdditionalPrinterColumns, then wait for that to go out before vendoring this in. It would take longer to roll out, but it would avoid the contention.
+1
Any thoughts on how we would test the QPS to the API server, and how we would ignore diffs in GK?
Testing would involve manually halting an upgrade mid-deployment. I think this can be simulated by just creating 3 Gatekeeper Pods (without a Deployment) and seeing what happens to the traffic, or by having two Deployments and slowly cross-fading between the two.

As to how we can ignore diffs... it's a 2-release dance:

Release 1: GK ignores diffs caused by AdditionalPrinterColumns.
Release 2: vendor this change in so GK adds the columns.

That way, release 2 will add the new feature where missing and release 1 won't try to revert it.
What happens if users go directly to release 2 and skip release 1?
There will be potential write contention between the two versions. The impact would have to be tested, but I believe it's just increased QPS to the API server. A more dangerous possibility would be CRs getting deleted because the CRD changed, but I don't think K8s does that. This model is basically a database refactoring; it rhymes with Martin Fowler's transition phase.
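To make the release-1 "ignore diffs" step concrete, here is a minimal sketch of how a reconciler might decide whether a live constraint CRD needs an update while tolerating printer-column changes made by a newer version. It assumes apiextensions/v1 CRD types; the needsUpdate helper is illustrative, not the actual Gatekeeper code.

```go
package crdsync

import (
	"reflect"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// needsUpdate reports whether the live CRD differs from the desired CRD in a
// way release 1 cares about. Differences confined to additionalPrinterColumns
// are ignored, so a newer GK version can add those columns without release 1
// trying to revert them.
func needsUpdate(desired, live *apiextensionsv1.CustomResourceDefinition) bool {
	desiredSpec := desired.Spec.DeepCopy()
	liveSpec := live.Spec.DeepCopy()

	// Blank out the printer columns on both sides before comparing.
	for i := range desiredSpec.Versions {
		desiredSpec.Versions[i].AdditionalPrinterColumns = nil
	}
	for i := range liveSpec.Versions {
		liveSpec.Versions[i].AdditionalPrinterColumns = nil
	}

	return !reflect.DeepEqual(desiredSpec, liveSpec)
}
```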
Force-pushed from 4914286 to c4e3ea8.
Signed-off-by: Will Beason <willbeason@gmail.com>
Signed-off-by: Will Beason <willbeason@google.com>
Signed-off-by: Will Beason <willbeason@gmail.com>
Force-pushed from c4e3ea8 to d3b3455.
Codecov Report
@@            Coverage Diff             @@
##           master     #108      +/-   ##
==========================================
+ Coverage   44.91%   45.18%   +0.26%
==========================================
  Files          59       59
  Lines        2783     2793      +10
==========================================
+ Hits         1250     1262      +12
+ Misses      1290     1289       -1
+ Partials     243      242       -1
DCO is incorrect. I have signed off on all commits.
My guess is that DCO doesn't like that I added a commit from my Windows machine, which uses "@gmail.com", unlike my corp machine.
@willbeason I see this got merged yesterday. Was this #108 (review) addressed before merging into GK?
Impact: no customer data loss; some controller fights mid-rollout as different controllers try to write the "correct" version of the CT. The controller fight goes away once the rollout completes.
Symptoms of a controller fight: increased requests to the API server and an increasing resource version/generation.
Controller fights could be mitigated by moving CT -> CRD creation to a singleton controller (place it on the audit pod, like the status controller is).
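A rough sketch of the singleton idea, assuming the owning pod is designated via an environment variable and wired through a controller-runtime manager; the environment variable and function names are hypothetical, not existing Gatekeeper configuration.

```go
package controllers

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
)

// crdOwner reports whether this pod should run the controller that writes
// ConstraintTemplate-derived CRDs. Pinning that controller to one pod (e.g.
// the audit pod) makes CRD creation a singleton, so two GK versions rolling
// out at the same time no longer fight over the same CRD objects.
// The environment variable name is hypothetical.
func crdOwner() bool {
	return os.Getenv("MANAGE_CONSTRAINT_CRDS") == "true"
}

// AddCRDController registers the CT -> CRD writer only on the owning pod.
// addCRDController stands in for the real reconciler wiring.
func AddCRDController(mgr ctrl.Manager, addCRDController func(ctrl.Manager) error) error {
	if crdOwner() {
		return addCRDController(mgr)
	}
	return nil
}
```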
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>

Adds total violations and enforcement action as part of kubectl get on constraints.
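For context, a sketch of the kind of printer columns this change adds to each generated constraint CRD, written against apiextensions/v1 types; the column names and JSONPaths are illustrative rather than copied from this PR.

```go
package crds

import apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"

// constraintPrinterColumns lists the extra columns surfaced by `kubectl get`
// on constraints: the configured enforcement action and the audit-reported
// total violations.
var constraintPrinterColumns = []apiextensionsv1.CustomResourceColumnDefinition{
	{
		Name:     "enforcement-action",
		Type:     "string",
		JSONPath: ".spec.enforcementAction",
	},
	{
		Name:     "total-violations",
		Type:     "integer",
		JSONPath: ".status.totalViolations",
	},
}
```

With columns like these in place, running kubectl get against a constraint kind would show the enforcement action and the current violation count next to each constraint name.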